
    Generalizability of Deep Adult Lung Segmentation Models to the Pediatric Population: A Retrospective Study

    Lung segmentation in chest X-rays (CXRs) is an important prerequisite for improving the specificity of diagnoses of cardiopulmonary diseases in a clinical decision support system. Current deep learning (DL) models for lung segmentation are trained and evaluated on CXR datasets in which the radiographic projections are captured predominantly from the adult population. However, the shape of the lungs is reported to differ significantly in pediatric patients across the developmental stages from infancy to adulthood. This may introduce age-related data domain shifts that adversely impact lung segmentation performance when models trained on the adult population are deployed for pediatric lung segmentation. In this work, our goal is to analyze the generalizability of deep adult lung segmentation models to the pediatric population and improve performance through a systematic combinatorial approach consisting of CXR modality-specific weight initializations, stacked generalization, and an ensemble of the stacked generalization models. Novel evaluation metrics consisting of Mean Lung Contour Distance and Average Hash Score are proposed in addition to the Multi-scale Structural Similarity Index Measure, Intersection over Union, and Dice metrics to evaluate segmentation performance. We observed a significant improvement (p < 0.05) in cross-domain generalization through our combinatorial approach. This study could serve as a paradigm for analyzing the cross-domain generalizability of deep segmentation models for other medical imaging modalities and applications. (Comment: 11 pages, 7 figures, and 8 tables)
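
    The overlap metrics named above are standard; a minimal NumPy sketch of two of them (Dice and Intersection over Union) on binary lung masks is given below. It is illustrative only, not the authors' implementation, and the random masks in the usage example are stand-ins.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def iou_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union between two binary masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return (intersection + eps) / (union + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.random((256, 256)) > 0.5    # stand-in predicted lung mask
    truth = rng.random((256, 256)) > 0.5   # stand-in ground-truth mask
    print(f"Dice: {dice_score(pred, truth):.3f}  IoU: {iou_score(pred, truth):.3f}")
```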

    Content–based fMRI Brain Maps Retrieval

    The statistical analysis of functional magnetic resonance imaging (fMRI) is used to extract functional data of cerebral activation during a given experimental task. It allows for assessing changes in cerebral function related to cerebral activities. This methodology has been widely used, and a few initiatives aim to develop shared data resources. Searching these data resources for a specific research goal remains a challenging problem. In particular, work is needed to create a global content–based (CB) fMRI retrieval capability. This work presents a CB fMRI retrieval approach based on brain activation maps extracted using Probabilistic Independent Component Analysis (PICA). We obtained promising results on data from a variety of experiments, which highlight the potential of the system as a tool that provides support for finding hidden similarities between brain activation maps.
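
    A hedged sketch of the retrieval step only: stored activation maps are ranked by similarity to a query map, here with plain cosine similarity. The PICA component extraction described above is not reproduced, the maps are assumed to be pre-aligned arrays of identical shape, and the function names are illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two activation maps, flattened to vectors."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def rank_maps(query: np.ndarray, collection: dict) -> list:
    """Return (name, score) pairs sorted by decreasing similarity to the query."""
    scores = [(name, cosine_similarity(query, m)) for name, m in collection.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```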

    A bone suppression model ensemble to improve COVID-19 detection in chest X-rays

    Chest X-ray (CXR) is a widely performed radiology examination that helps to detect abnormalities in the tissues and organs in the thoracic cavity. Detecting pulmonary abnormalities such as COVID-19 can be difficult because they are obscured by bony structures such as the ribs and the clavicles, resulting in screening/diagnostic misinterpretations. Automated bone suppression methods help suppress these bony structures and increase soft-tissue visibility. In this study, we propose to build an ensemble of convolutional neural network models to suppress bones in frontal CXRs, improve classification performance, and reduce interpretation errors related to COVID-19 detection. The ensemble is constructed by (i) measuring the multi-scale structural similarity index (MS-SSIM) score between the sub-blocks of the bone-suppressed image predicted by each of the top-3 performing bone-suppression models and the corresponding sub-blocks of its respective ground-truth soft-tissue image, and (ii) performing a majority voting of the MS-SSIM scores computed in each sub-block to identify the sub-block with the maximum MS-SSIM score and use it in constructing the final bone-suppressed image. We empirically determine the sub-block size that delivers superior bone suppression performance. It is observed that the bone suppression model ensemble outperformed the individual models in terms of MS-SSIM and other metrics. A CXR modality-specific classification model is retrained and evaluated on the non-bone-suppressed and bone-suppressed images to classify them as showing normal lungs or other COVID-19-like manifestations. We observed that the model trained on bone-suppressed images significantly outperformed the model trained on non-bone-suppressed images in detecting COVID-19 manifestations. (Comment: 29 pages, 10 figures, 4 tables)
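
    A simplified sketch of the per-sub-block selection idea: for each sub-block, keep the candidate prediction that scores highest against a reference soft-tissue image. Plain SSIM from scikit-image stands in for MS-SSIM, the 64-pixel block size is an assumption, and image dimensions are assumed to be multiples of the block size; this is not the authors' exact pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def ensemble_by_blocks(candidates, reference, block=64):
    """Assemble an output image by picking, per sub-block, the candidate
    closest to the reference. candidates: list of 2-D float arrays in [0, 1],
    all with the same shape as reference."""
    out = np.zeros_like(reference)
    h, w = reference.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            ref_blk = reference[y:y + block, x:x + block]
            scores = [
                ssim(c[y:y + block, x:x + block], ref_blk, data_range=1.0)
                for c in candidates
            ]
            best = int(np.argmax(scores))
            out[y:y + block, x:x + block] = candidates[best][y:y + block, x:x + block]
    return out
```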

    Graph Representation for Content–based fMRI Activation Map Retrieval

    The use of functional magnetic resonance imaging (fMRI) to visualize brain activity in a non–invasive way is an emerging technique in neuroscience. It is expected that data sharing and the development of better search tools for the large amount of existing fMRI data may lead to a better understanding of the brain through the use of larger sample sizes or by allowing collaboration among experts in various areas of expertise. In fact, there is a trend toward such sharing of fMRI data, but there is a lack of tools to effectively search fMRI data repositories, a factor which limits further research use of these repositories. Content–based (CB) fMRI brain map retrieval tools may alleviate this problem. A CB–fMRI brain map retrieval tool queries a brain activation map collection (containing brain maps showing activation areas after a stimulus is applied to a subject) and retrieves relevant brain activation maps, i.e., maps that are similar to the query brain activation map. In this work, we propose a graph–based representation for brain activation maps with the goal of improving retrieval accuracy as compared to existing methods. In this brain graph, nodes represent different specialized regions of a functional–based brain atlas. We evaluated our approach using human subject data obtained from eight experiments where a variety of stimuli were applied.
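
    A hedged sketch of building such a graph with NetworkX: nodes are atlas regions carrying their mean activation. The co-activation edge rule and the threshold below are assumptions made purely for illustration, not the authors' edge definition.

```python
import numpy as np
import networkx as nx

def activation_graph(act_map: np.ndarray, atlas: np.ndarray, thresh: float = 1.0) -> nx.Graph:
    """act_map and atlas are voxel arrays of identical shape; atlas holds
    integer region labels (0 = background)."""
    g = nx.Graph()
    regions = [int(r) for r in np.unique(atlas) if r != 0]
    means = {r: float(act_map[atlas == r].mean()) for r in regions}
    for r in regions:
        g.add_node(r, mean_activation=means[r])
    # Assumed co-activation rule: connect every pair of regions whose mean
    # activation exceeds the threshold.
    active = [r for r in regions if means[r] >= thresh]
    for i, r1 in enumerate(active):
        for r2 in active[i + 1:]:
            g.add_edge(r1, r2)
    return g
```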

    Using Crowdsourcing for Multi-label Biomedical Compound Figure Annotation

    Information analysis or retrieval for images in the biomedical literature needs to deal with a large number of compound figures (figures containing several subfigures), as they constitute probably more than half of all images in repositories such as PubMed Central, which was the data set used for this task. In 2015 and 2016, the ImageCLEFmed benchmark proposed, among other tasks, a multi-label classification task that aims at evaluating the automatic classification of figures into 30 image types. This task was based on compound figures, and thus the figures were distributed to participants both as compound figures and in separated form. Therefore, the generation of a gold standard was required so that the algorithms of participants could be evaluated and compared. This work presents the process carried out to generate the multi-labels of ∼2650 compound figures using a crowdsourcing approach. Automatic algorithms to separate compound figures into subfigures were used, and the results were then validated or corrected via crowdsourcing. The image types (MR, CT, X–ray, ...) were also annotated by crowdsourcing, including detailed quality control. Quality control is necessary to ensure the quality of the annotated data as much as possible. Approximately 625 hours were invested, at a cost of about $870.

    Automatic Segmentation of Subfigure Image Panels for Multimodal Biomedical Document Retrieval

    Biomedical images are often referenced for clinical decision support (CDS), educational purposes, and research. The task of automatically finding the images in a scientific article that are most useful for determining relevance to a clinical situation is traditionally done using text and is quite challenging. We propose to improve this by associating image features from the entire image and from relevant regions of interest with biomedical concepts described in the figure caption or in the discussion in the article. However, images used in scientific article figures are often composed of multiple panels, where each sub-figure (panel) is referenced in the caption using alphanumeric labels, e.g. Figure 1(a), 2(c), etc. It is necessary to separate individual panels from a multi-panel figure as a first step toward automatic annotation of images. In this work we present methods that make our previously reported efforts more robust. Specifically, we address the limitation in segmenting figures that do not exhibit explicit inter-panel boundaries, e.g. illustrations, graphs, and charts. We present a novel hybrid clustering algorithm based on particle swarm optimization (PSO) with a fuzzy logic controller (FLC) to locate related figure components in such images. Results from our evaluation are very promising, with 93.64% panel detection accuracy for regular (non-illustration) figure images and 92.1% accuracy for illustration images. A computational complexity analysis also shows that PSO is an optimal approach with relatively low computation time. The accuracy of separating these two types of images is 98.11%, achieved using a decision tree.
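
    A minimal particle-swarm sketch of the clustering idea only: each particle encodes k candidate cluster centres for 2-D coordinates of figure components, and fitness is the total distance of points to their nearest centre. The fuzzy logic controller the paper uses to adapt PSO parameters is omitted, and the inertia and acceleration coefficients below are conventional assumed defaults, not the authors' settings.

```python
import numpy as np

def pso_cluster(points, k=3, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """points: (N, 2) array of component coordinates; returns k cluster centres."""
    rng = np.random.default_rng(seed)
    lo, hi = points.min(axis=0), points.max(axis=0)
    pos = rng.uniform(lo, hi, size=(n_particles, k, 2))   # candidate centre sets
    vel = np.zeros_like(pos)

    def fitness(centres):
        d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        return d.min(axis=1).sum()   # total distance to nearest centre

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([fitness(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest
```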

    Performance evaluation of deep neural ensembles toward malaria parasite detection in thin-blood smear images

    Background: Malaria is a life-threatening disease caused by Plasmodium parasites that infect the red blood cells (RBCs). Manual identification and counting of parasitized cells in microscopic thick/thin-film blood examination remains the common, but burdensome, method for disease diagnosis. Its diagnostic accuracy is adversely impacted by inter/intra-observer variability, particularly in large-scale screening under resource-constrained settings. Introduction: State-of-the-art computer-aided diagnostic tools based on data-driven deep learning algorithms such as the convolutional neural network (CNN) have become the architecture of choice for image recognition tasks. However, CNNs suffer from high variance and may overfit due to their sensitivity to training data fluctuations. Objective: The primary aim of this study is to reduce model variance and improve robustness and generalization by constructing model ensembles toward detecting parasitized cells in thin-blood smear images. Methods: We evaluate the performance of custom and pretrained CNNs and construct an optimal model ensemble toward the challenge of classifying parasitized and normal cells in thin-blood smear images. Cross-validation studies are performed at the patient level to prevent data leakage into the validation set and to reduce generalization errors. The models are evaluated in terms of the following performance metrics: (a) accuracy; (b) area under the receiver operating characteristic (ROC) curve (AUC); (c) mean squared error (MSE); (d) precision; (e) F-score; and (f) Matthews Correlation Coefficient (MCC). Results: It is observed that the ensemble model constructed with VGG-19 and SqueezeNet outperformed the state-of-the-art in several performance metrics toward classifying the parasitized and uninfected cells to aid in improved disease screening. Conclusions: Ensemble learning reduces model variance by optimally combining the predictions of multiple models and decreases sensitivity to the specifics of the training data and the selection of training algorithms. The model ensemble simulates real-world conditions with reduced variance and overfitting, leading to improved generalization.
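
    A hedged sketch of the simplest form of the ensemble idea: average the per-cell probabilities of two already-trained models and score the hard predictions with the Matthews Correlation Coefficient via scikit-learn. Model architectures, training, and the authors' specific combination strategy are not reproduced; the probability arrays and the 0.5 threshold are assumed inputs.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

def ensemble_probs(probs_a: np.ndarray, probs_b: np.ndarray) -> np.ndarray:
    """Unweighted average of two models' P(parasitized) per cell image."""
    return (probs_a + probs_b) / 2.0

def ensemble_mcc(probs_a, probs_b, labels, threshold=0.5):
    """Threshold the averaged probabilities and report MCC against the labels."""
    preds = (ensemble_probs(probs_a, probs_b) >= threshold).astype(int)
    return matthews_corrcoef(labels, preds)
```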